# Trained on 1.4B Image-Text Pairs
CLIP-ViT-B-32-256x256-DataComp-s34b-b86k
License: MIT
This is a CLIP ViT-B/32 model trained on the DataComp-1B dataset with the OpenCLIP framework at 256×256 resolution, intended primarily for zero-shot image classification and image-text retrieval.
Task: Text-to-Image
Author: laion
© 2025 AIbase